Researchers typically employ numerical methods to understand and predict ocean dynamics, a key task in mastering environmental phenomena. Such methods may be unsuitable when the topography is complex, knowledge of the underlying processes is incomplete, or the application is critical. On the other hand, if ocean dynamics are observed, they can be exploited by recent machine learning methods. In this paper we describe a data-driven method for predicting environmental variables, such as current velocity and sea surface height, in the Santos-São Vicente-Bertioga estuarine system on the southeastern coast of Brazil. Our model exploits both temporal and spatial inductive biases by joining state-of-the-art sequence models (LSTM and Transformers) with relational models (graph neural networks), learning temporal features as well as the spatial relationships shared among observation sites. We compare our results with the Santos Operational Forecasting System (SOFS). Experiments show that our model achieves better results while maintaining flexibility and little dependence on domain knowledge.
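The combination of spatial and temporal inductive biases described above can be illustrated with a deliberately tiny sketch: a graph step that shares information between neighboring observation sites, followed by a recurrent step that accumulates it over time. The adjacency, the sea-surface values, and the moving-average "cell" are all invented stand-ins, not the paper's architecture.

```python
# Toy sketch: a graph-convolution-like step (spatial bias) feeding a
# recurrent update (temporal bias). All values are illustrative only.

def graph_step(features, neighbors):
    """Average each site's value with its neighbors' (a crude GCN layer)."""
    out = {}
    for site, value in features.items():
        vals = [value] + [features[n] for n in neighbors[site]]
        out[site] = sum(vals) / len(vals)
    return out

def recurrent_step(hidden, x, alpha=0.5):
    """Exponential-moving-average stand-in for an LSTM cell."""
    return {s: alpha * hidden[s] + (1 - alpha) * x[s] for s in x}

# Three hypothetical observation sites with a line topology: A - B - C.
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
series = [  # made-up sea-surface heights at each site, three time steps
    {"A": 1.0, "B": 2.0, "C": 3.0},
    {"A": 1.5, "B": 2.5, "C": 3.5},
    {"A": 2.0, "B": 3.0, "C": 4.0},
]

hidden = {s: 0.0 for s in neighbors}
for frame in series:
    spatial = graph_step(frame, neighbors)    # spatial inductive bias
    hidden = recurrent_step(hidden, spatial)  # temporal inductive bias

prediction = hidden  # a real model would feed this to a regression head
```

In a full system each site would carry a feature vector and the update rules would be learned, but the control flow (graph step per frame, recurrent state across frames) is the same.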
We describe a Physics-Informed Neural Network (PINN) that simulates the flow induced by the astronomical tide in a synthetic port channel, with dimensions based on the Santos - São Vicente - Bertioga Estuarine System. PINN models aim to combine the knowledge of physical systems and data-driven machine learning models. This is done by training a neural network to minimize the residuals of the governing equations in sample points. In this work, our flow is governed by the Navier-Stokes equations with some approximations. There are two main novelties in this paper. First, we design our model to assume that the flow is periodic in time, which is not feasible in conventional simulation methods. Second, we evaluate the benefit of resampling the function evaluation points during training, which has a near-zero computational cost and has been verified to improve the final model, especially for small batch sizes. Finally, we discuss some limitations of the approximations used in the Navier-Stokes equations regarding the modeling of turbulence and how it interacts with PINNs.
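The two novelties above can be sketched independently of the full PINN machinery. Exact time-periodicity can be built in by feeding the network `(sin(ωt), cos(ωt))` instead of raw `t`, and collocation-point resampling is just drawing fresh residual-evaluation points each step. The period, the tiny "model", and its weights below are all assumptions for illustration, not the authors' setup.

```python
import math
import random

T = 12.0                  # assumed tidal period (illustrative)
w = 2.0 * math.pi / T

def periodic_features(x, t):
    """Any function of these features is exactly T-periodic in t."""
    return (x, math.sin(w * t), math.cos(w * t))

def model(x, t, params=(0.3, 0.7)):
    """Stand-in for the neural network; params play the role of weights."""
    a, b = params
    xf, s, c = periodic_features(x, t)
    return a * s * xf + b * c

def resample_collocation(n, x_range=(0.0, 1.0)):
    """Fresh residual-evaluation points each training step (near-zero cost)."""
    return [(random.uniform(*x_range), random.uniform(0.0, T))
            for _ in range(n)]

# Periodicity holds by construction: u(x, t) == u(x, t + T).
u0 = model(0.5, 1.0)
u1 = model(0.5, 1.0 + T)
```

Because periodicity is encoded in the input map rather than enforced by a penalty, no extra loss term is needed to keep the tidal cycle consistent.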
This work presents a kernel selection approach for probabilistic classifiers based on features produced by the convolutional encoder of a variational autoencoder. In particular, the developed methodology makes it possible to select the most relevant subset of latent variables. In the proposed implementation, each latent variable is sampled from the distribution associated with a single kernel of the encoder's last convolutional layer, since an individual distribution is created for each kernel. Selecting relevant features from the sampled latent variables therefore amounts to kernel selection, filtering out uninformative features and kernels. This leads to a reduction in the number of model parameters. Both wrapper and filter methods were evaluated for feature selection. The latter is particularly relevant because it is based solely on the kernels' distributions: it is evaluated by measuring the Kullback-Leibler divergence between all distributions, under the hypothesis that kernels whose distributions are most similar can be discarded. This hypothesis was confirmed, since it was observed that the most similar kernels do not convey relevant information and can be removed. As a result, the proposed methodology is suitable for developing applications for resource-constrained devices.
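The filter criterion has a convenient closed form when each kernel's latent distribution is a univariate Gaussian, as in a standard VAE. A minimal sketch, with made-up distribution parameters, finds the pair of kernels whose distributions are most alike (smallest symmetric KL) and therefore the first candidates for removal:

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """KL( N(m1, s1^2) || N(m2, s2^2) ), closed form for 1-D Gaussians."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def most_redundant_pair(dists):
    """Kernel pair with the smallest symmetric KL divergence."""
    names = list(dists)
    best, best_pair = float("inf"), None
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a, b = dists[names[i]], dists[names[j]]
            d = kl_gauss(*a, *b) + kl_gauss(*b, *a)
            if d < best:
                best, best_pair = d, (names[i], names[j])
    return best_pair

# Hypothetical (mean, std) per kernel; k0 and k1 are nearly identical.
dists = {"k0": (0.0, 1.0), "k1": (0.05, 1.0), "k2": (3.0, 0.5)}
pair = most_redundant_pair(dists)
```

Since the criterion never touches labels or a downstream classifier, it matches the abstract's point that the filter method depends only on the kernels' distributions.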
This work proposes ProBoost, a new boosting algorithm for probabilistic classifiers. The algorithm uses the epistemic uncertainty of each training sample to determine the most challenging/uncertain ones. The relevance of these samples is then increased for the next weak learner, producing a sequence that progressively focuses on the samples with the highest uncertainty. Finally, the outputs of the weak learners are combined into a weighted ensemble of classifiers. Three methods are proposed to manipulate the training set: undersampling, oversampling, and weighting the training samples according to the uncertainty estimated by the weak learner. In addition, two approaches for combining the ensemble are studied. The weak learners considered in this paper are standard convolutional neural networks, and the probabilistic models used for uncertainty estimation use either variational inference or Monte Carlo dropout. Experimental evaluation on the MNIST benchmark dataset shows that ProBoost yields a significant improvement in performance. The results are further highlighted by evaluating the relative achievable improvement proposed in this work, a metric showing that a model with only four weak learners improves this metric by more than 12% (in accuracy, sensitivity, or specificity) compared with the model without ProBoost.
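The core loop, reduced to its data-manipulation step, can be sketched as follows. The per-sample uncertainty values are invented here; in the paper they would come from variational inference or Monte Carlo dropout, and the oversampling rule below is a crude stand-in for the schemes actually studied.

```python
# Sketch: turn per-sample epistemic uncertainty into (a) sampling weights
# and (b) an oversampled training set for the next weak learner.

def uncertainty_weights(uncertainties):
    """Normalize per-sample uncertainties into sampling weights."""
    total = sum(uncertainties)
    return [u / total for u in uncertainties]

def oversample(indices, weights, factor=2):
    """Duplicate the most uncertain samples (a crude oversampling scheme)."""
    k = max(1, len(indices) // factor)
    hardest = sorted(indices, key=lambda i: weights[i], reverse=True)[:k]
    return list(indices) + hardest

uncerts = [0.10, 0.40, 0.05, 0.45]       # fabricated per-sample uncertainty
weights = uncertainty_weights(uncerts)    # sums to 1.0
new_set = oversample([0, 1, 2, 3], weights)
```

Iterating this step, with each weak learner re-estimating the uncertainties, gives the progressively harder training sequence the abstract describes.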
Deep learning Transformers have substantially improved systems that automatically answer questions in natural language. However, different questions demand different answering techniques. Here we propose, build, and validate an architecture that integrates distinct modules to answer two different types of queries. Our architecture takes free-form natural-language text and classifies it, sending it either to a neural question-answering module or to a natural-language-to-SQL parser. We implemented a complete system for Portuguese using some of the major tools available for the language, together with translated training and test datasets. Experiments show that our system selects the appropriate answering method with high accuracy (over 99%), thus validating the modular question-answering strategy.
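The routing step is the architectural hinge. As a hedged sketch, a keyword rule stands in below for the trained classifier, and the module names are hypothetical; the point is only the control flow of classify-then-dispatch:

```python
# Sketch of the dispatcher: classify a question and route it to the right
# answering module. Cue list and module names are illustrative assumptions.

SQL_CUES = {"how many", "average", "total", "count", "list all"}

def route(question):
    """Return which answering module should handle the question."""
    q = question.lower()
    if any(cue in q for cue in SQL_CUES):
        return "text_to_sql"   # structured/aggregate queries over a database
    return "neural_qa"         # free-form questions answered over text

r1 = route("How many employees joined in 2020?")
r2 = route("Who wrote Os Lusíadas?")
```

In the actual system this decision is made by a trained classifier rather than keywords, which is what achieves the reported over-99% routing accuracy.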
Biomedical decision-making involves processing multiple signals, either from different sensors or from different channels. In both cases, information fusion plays a significant role. In this work, deep-learning-based feature-level fusion of electroencephalogram (EEG) channels is performed for the cyclic alternating pattern (CAP). The channel selection, fusion, and classification procedures were optimized by two optimization algorithms, namely a genetic algorithm and particle swarm optimization. The developed methodology was evaluated by fusing information from multiple EEG channels for patients with nocturnal frontal lobe epilepsy and patients without any neurological disorder, which is significantly more challenging than in other state-of-the-art works. Results show that both optimization algorithms selected a comparable structure with similar feature-level fusion, consisting of three EEG channels, which is in line with the CAP protocol's requirement of multiple channels for CAP detection. Furthermore, both optimized models reached an area under the receiver operating characteristic curve of 0.82, with average accuracy ranging from 77% to 79%, results that are in the upper range of the specialist agreement. Despite the difficulty of the dataset, the proposed methodology is in the upper range of the best state-of-the-art works, with the advantage of providing a fully automatic analysis without requiring any manual procedure. Ultimately, the models were shown to be noise-resistant and resilient to the loss of multiple channels.
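To make the channel-selection search concrete, here is a deliberately small genetic algorithm over subsets of six channels. The per-channel "separability scores" are fabricated; a real fitness function would be cross-validated classifier performance on the fused features, and the paper also considers particle swarm optimization as an alternative search.

```python
import random

random.seed(0)
SCORES = [0.9, 0.2, 0.8, 0.1, 0.7, 0.3]  # hypothetical per-channel scores
N_SELECT = 3                              # three-channel subsets, as selected

def fitness(mask):
    """Made-up fitness: sum of scores of the selected channels."""
    if sum(mask) != N_SELECT:
        return -1.0
    return sum(s for s, m in zip(SCORES, mask) if m)

def random_mask():
    chosen = random.sample(range(len(SCORES)), N_SELECT)
    return [int(i in chosen) for i in range(len(SCORES))]

def mutate(mask):
    """Swap one selected channel for an unselected one (size-preserving)."""
    m = mask[:]
    on = [i for i, v in enumerate(m) if v]
    off = [i for i, v in enumerate(m) if not v]
    i, j = random.choice(on), random.choice(off)
    m[i], m[j] = 0, 1
    return m

population = [random_mask() for _ in range(10)]
for _ in range(50):                       # elitist generational loop
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(5)]

best = max(population, key=fitness)
```

The size-preserving mutation keeps every candidate a valid three-channel subset, so no repair step is needed, which is one common way to encode such constraints in a GA.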
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely monitor the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.
Attention mechanisms form a core component of several successful deep learning architectures, and are based on one key idea: "The output depends only on a small (but unknown) segment of the input." In several practical applications like image captioning and language translation, this is mostly true. In trained models with an attention mechanism, the outputs of an intermediate module that encodes the segment of input responsible for the output are often used as a way to peek into the "reasoning" of the network. We make such a notion more precise for a variant of the classification problem that we term selective dependence classification (SDC) when used with attention model architectures. Under such a setting, we demonstrate various error modes where an attention model can be accurate but fail to be interpretable, and show that such models do occur as a result of training. We illustrate various situations that can accentuate and mitigate this behaviour. Finally, we use our objective definition of interpretability for SDC tasks to evaluate a few attention model learning algorithms designed to encourage sparsity and demonstrate that these algorithms help improve interpretability.
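The accuracy-versus-interpretability gap in an SDC-style setting can be shown with fabricated numbers: a model is counted as interpretable on an example only when its attention peak falls on the segment that actually determines the label. Everything below is an illustrative toy, not the paper's evaluation protocol.

```python
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Each example: per-segment attention scores, predicted label, true label,
# and the index of the segment that actually determines the label.
examples = [
    ([2.0, 0.1, 0.1], "pos", "pos", 0),  # accurate AND interpretable
    ([0.1, 3.0, 0.1], "pos", "pos", 2),  # accurate but NOT interpretable
]

accurate = sum(pred == true for _, pred, true, _ in examples)
interpretable = sum(
    max(range(len(sc)), key=lambda i: softmax(sc)[i]) == seg
    for sc, _, _, seg in examples
)
```

On this toy set the model is right on both examples but its attention identifies the responsible segment only once, which is exactly the dissociation the abstract describes.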
Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of these approaches to addressing the COD has led us to solving high-dimensional PDEs. This has opened doors to solving a variety of real-world problems, ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. Tackling these shortcomings, Tensor Neural Networks (TNN) demonstrate that they can provide significant parameter savings while attaining the same accuracy as a classical Dense Neural Network (DNN). In addition, we show that TNN can be trained faster than DNN for the same accuracy. Besides TNN, we also introduce Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count compared with a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory.
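A back-of-the-envelope calculation conveys why tensorized layers can save parameters: replacing a dense m-by-n weight matrix with a rank-r factorization A (m x r) times B (r x n) shrinks the count from roughly m*n to r*(m + n). The shapes and rank below are illustrative assumptions, not the paper's TNN architecture.

```python
# Parameter-count comparison: dense layer vs. a rank-r factorized layer.

def dense_params(m, n):
    return m * n + n               # weight matrix + biases

def factored_params(m, n, r):
    return m * r + r * n + n       # two thin factors + biases

m, n, r = 512, 512, 16
savings = 1 - factored_params(m, n, r) / dense_params(m, n)
# at rank 16, the factorized layer keeps only a small fraction of the
# dense layer's parameters
```

Tensor-network layers generalize this idea beyond a single matrix factorization, but the parameter-saving mechanism is the same trade of full-rank weights for low-rank structure.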